88 research outputs found

    An architecture for improving the reusability of graphical user interfaces

    Get PDF
    Graphical user interface programming remains a laborious and time-consuming exercise, as most of the interaction usually has to be coded in a programming language. This code, often verbose and hard to read, hinders interface reusability and iterative design, since even a minor modification of the user interface requires it to be substantially rewritten. We propose an architectural model and an experimental toolkit that make such modifications easier and inexpensive.

    New physical interactions for mobile devices

    Get PDF
    Sixty percent of the world's population now owns a mobile phone. Recent models, which are becoming small handheld computers, are up to 10,000 times more powerful and 300,000 times lighter than the first computer, which appeared 60 years ago. Despite this power and the resources provided by their hardware (multitouch screens, accelerometers and other sensors...), mobile phones suffer from certain limitations due to the lack of conventional input devices such as the keyboard and mouse. To increase the interaction bandwidth of these devices and take advantage of their sensors, this thesis investigates the potential of gestural interaction along two lines of research: movements on the mobile and movements of the mobile. After defining a classification space, we designed and developed several gestural interaction techniques to enrich and facilitate interaction between the user and these devices.
    Concerning movements on the mobile, we proposed improvements to the flick, a scrolling technique widely popularized in recent years. We first studied the anatomy of this technique, then proposed to exploit several hitherto unused interactional resources. This work gave birth to three new techniques: Flick-and-Brake and LongFlick, which respectively use pressure on the screen and the characteristics of the throwing gesture to better control scrolling, and Semantic Flicking, which leverages document semantics to facilitate reading. In a second stage we considered the potential of 3D interaction, performed by moving the mobile device in space. This line of research aims at expanding the interaction vocabulary while avoiding interference with existing touch interactions. Leveraging the sensors integrated in mobile devices, in particular accelerometers, we proposed two new 3D interaction techniques: TimeTilt, which uses smooth and impulsive gestures to navigate easily between different views, and JerkTilt, which introduces the notion of 'self-delimited' gestures for quickly accessing commands. Since these self-delimited gestures can also easily be combined with interactions on the screen, we finally considered combining the modalities offered by two- and three-dimensional gestures.
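    As a concrete illustration of the Flick-and-Brake idea summarized above, the sketch below models inertial scrolling whose friction increases with touch pressure. It is a minimal sketch, not the thesis implementation: the constants, the FlickAndBrakeScroller class and its per-frame update are assumptions made for this listing.

# Flick-and-Brake sketch: a flick gives the view an initial scroll velocity,
# and pressing the screen while it scrolls adds friction proportional to the
# normalized touch pressure. All constants and the API are illustrative.

BASE_FRICTION = 0.02      # fraction of velocity lost per frame with no touch
BRAKE_GAIN = 0.30         # extra friction per unit of normalized pressure

class FlickAndBrakeScroller:
    def __init__(self):
        self.velocity = 0.0   # pixels per frame
        self.offset = 0.0     # current scroll position in pixels

    def on_flick(self, gesture_speed: float) -> None:
        """Start inertial scrolling from the speed of the throwing gesture."""
        self.velocity = gesture_speed

    def on_frame(self, pressure: float = 0.0) -> None:
        """Advance one animation frame; pressure is 0..1 from the touch sensor."""
        friction = BASE_FRICTION + BRAKE_GAIN * max(0.0, min(pressure, 1.0))
        self.velocity *= (1.0 - friction)
        if abs(self.velocity) < 0.1:      # stop when nearly still
            self.velocity = 0.0
        self.offset += self.velocity

# Usage: flick at 40 px/frame, then brake by pressing firmly after 30 frames.
scroller = FlickAndBrakeScroller()
scroller.on_flick(40.0)
for frame in range(60):
    scroller.on_frame(pressure=0.8 if frame > 30 else 0.0)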

    Comparing Free Hand Menu Techniques for Distant Displays using Linear, Marking and Finger-Count Menus

    Get PDF
    Part 1: Long and Short Papers. Distant displays such as interactive public displays (IPD) or interactive television (ITV) require new interaction techniques, as traditional input devices may be limited or missing in these contexts. Free hand interaction, sensed with computer vision techniques, is a promising alternative. This paper presents the adaptation of three menu techniques to free hand interaction: the Linear menu, the Marking menu and the Finger-Count menu. A first study, based on a Wizard-of-Oz protocol, focuses on Finger-Counting postures in front of interactive television and public displays. It reveals that participants do not choose the most efficient gestures, neither before nor after the experiment. Its results are used to develop a Finger-Count recognizer. A second experiment shows that all three techniques achieve satisfactory accuracy, and that Finger-Count requires more mental demand than the other techniques.
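    To make the Finger-Count principle above concrete, the sketch below maps a detected number of extended fingers directly to an item of the current menu. It is a minimal sketch under stated assumptions: the menu contents and the idea that a separate vision pipeline supplies the finger count are illustrative, not the recognizer developed in the paper.

# Finger-Count menu sketch: the number of extended fingers reported by a hand
# tracker (not shown here) directly indexes an item in the current menu.

from typing import Optional, Sequence

MENU = ["Open", "Play", "Pause", "Volume", "Settings"]   # illustrative items

def select_by_finger_count(finger_count: int, menu: Sequence[str]) -> Optional[str]:
    """Map 1..len(menu) extended fingers to a menu item; anything else is a no-op."""
    if 1 <= finger_count <= len(menu):
        return menu[finger_count - 1]
    return None

# Example: three extended fingers select the third item.
assert select_by_finger_count(3, MENU) == "Pause"
assert select_by_finger_count(0, MENU) is None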

    Memory Manipulations in Extended Reality

    Full text link
    Human memory has notable limitations (e.g., forgetting) which have necessitated a variety of memory aids (e.g., calendars). As we grow closer to mass adoption of everyday Extended Reality (XR), which frequently leverages perceptual limitations (e.g., redirected walking), it becomes pertinent to consider how XR could leverage memory limitations (forgetting, distorting, persistence) to induce memory manipulations. As memories strongly shape our self-perception, social interactions, and behaviors, there is a pressing need to understand XR Memory Manipulations (XRMMs). We ran three speculative design workshops (n=12), with XR and memory researchers creating 48 XRMM scenarios. Through thematic analysis, we define XRMMs, present a framework of their core components, and reveal three classes (at encoding, pre-retrieval, at retrieval). Each class differs in terms of technology (AR, VR) and impact on memory (influencing the quality of memories, inducing forgetting, distorting memories). We raise ethical concerns and discuss the opportunities of perceptual and memory manipulations in XR.

    Designing GUIs by sketch drawing and visual programming

    Full text link

    XXL: A Dual Approach for Building User Interfaces

    No full text
    This paper presents XXL, a new interactive development system for building user interfaces which is based on the concept of textual and visual equivalence. XXL includes an interactive builder and a "small" C-compatible special-purpose language that is both interpretable and compilable. The visual builder is able to establish the reverse correspondence between the dynamic objects that it manipulates and their textual descriptions in the original source code. Interactive modifications performed with the builder result in incremental modifications of the original text. Lastly, XXL not only allows users to specify the widget part of the interface but can also be used to manage various behaviors and to create distributed interfaces.

    XXL: A Visual+Textual Environment for Building Graphical User Interfaces

    No full text
    This paper presents XXL, a visual+textual environment for the automated building of graphical user interfaces. The system uses a declarative language which is a subset of the C language and can either be interpreted or compiled. It includes an interactive builder that can handle both graphical and non-graphical objects. This tool makes it possible to create highly customized interfaces by visual programming or by "sketching" early interface ideas that are automatically interpreted by the system to produce executable GUI objects. The builder is based on the concept of textual+visual equivalence and is able to re-edit and modify any legible source code, not only the code it itself produced. The environment is thus a truly open system that can cooperate with higher-level tools. Keywords: user interface design, interface builders, visual/textual equivalence, sketching, model-based interface development, specification languages.
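    To illustrate the textual+visual equivalence concept described in the two XXL abstracts above, the sketch below parses a tiny declarative widget description into objects and writes a "visual" edit back as an incremental modification of the original source line. The toy syntax, the Widget class and the write_back helper are hypothetical illustrations for this listing; XXL's actual language is a C-compatible declarative language, not Python.

# Textual/visual equivalence sketch: widgets remember the source line they came
# from, so a change made through a builder rewrites only that line of the text.

import re

SOURCE = """button ok     label="OK"     x=10 y=10
button cancel label="Cancel" x=80 y=10"""

LINE_RE = re.compile(r'^button\s+(\w+)\s+(.*)$')
PROP_RE = re.compile(r'(\w+)=("[^"]*"|\S+)')

class Widget:
    def __init__(self, name: str, props: dict, line_no: int):
        self.name, self.props, self.line_no = name, props, line_no

def parse(source: str) -> list:
    """Build widget objects from the declarative text, keeping line positions."""
    widgets = []
    for i, line in enumerate(source.splitlines()):
        m = LINE_RE.match(line.strip())
        if m:
            props = {k: v.strip('"') for k, v in PROP_RE.findall(m.group(2))}
            widgets.append(Widget(m.group(1), props, i))
    return widgets

def write_back(source: str, widget: Widget, key: str, value: str) -> str:
    """Apply one 'visual' edit as an incremental edit of the widget's source line."""
    lines = source.splitlines()
    lines[widget.line_no] = re.sub(
        rf'{key}=("[^"]*"|\S+)', f'{key}="{value}"', lines[widget.line_no])
    widget.props[key] = value
    return "\n".join(lines)

# Moving the Cancel button in the builder only rewrites its x= attribute.
widgets = parse(SOURCE)
cancel = next(w for w in widgets if w.name == "cancel")
print(write_back(SOURCE, cancel, "x", "120"))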

    Cursive Script Recognition By Backward Matching

    No full text
    This paper proposes a new model for cursive script recognition which is both analytical and global and emphasizes the role of high-level contextual information. The model rests on a top-down recognition scheme called Backward Matching and a bottom-up feature extraction process, which work in a competitive way. The top-down recognition scheme allows multi-level correspondence between the levels of representation of the word image and those of the symbolic descriptions of the lexicon.
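    As a rough illustration of the Backward Matching idea, the sketch below treats every lexicon entry as a top-down hypothesis and scores it against coarse features extracted bottom-up from the word image, keeping the best-scoring word. The feature inventory, the extract_features stand-in and the scoring rule are assumptions made for this listing, not the model proposed in the paper.

# Backward-matching sketch: lexicon entries drive recognition top-down, each
# hypothesised letter being checked against features observed in the image.

from typing import Dict, List

# Expected coarse features per letter (ascender, descender, loop) -- an assumption.
LETTER_FEATURES: Dict[str, set] = {
    "l": {"ascender"}, "o": {"loop"}, "p": {"descender", "loop"},
    "t": {"ascender"}, "a": {"loop"},
}

def extract_features(word_image: List[set], position: int) -> set:
    """Stand-in for bottom-up extraction around one letter slot of the image."""
    return word_image[position] if position < len(word_image) else set()

def score_hypothesis(word: str, word_image: List[set]) -> float:
    """Top-down: walk the hypothesised word and check each expected letter."""
    if len(word) != len(word_image):
        return 0.0
    hits = 0
    for i, letter in enumerate(word):
        expected = LETTER_FEATURES.get(letter, set())
        observed = extract_features(word_image, i)
        hits += 1 if expected <= observed else 0
    return hits / len(word)

def recognize(word_image: List[set], lexicon: List[str]) -> str:
    """Return the lexicon entry whose top-down hypothesis matches the image best."""
    return max(lexicon, key=lambda w: score_hypothesis(w, word_image))

# A toy "image" whose slots carry the features of the word "loop".
image = [{"ascender"}, {"loop"}, {"loop"}, {"descender", "loop"}]
print(recognize(image, ["pool", "loop", "tall"]))   # -> "loop"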